Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

Submitted by: Peng Su

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code that cannot be included in the notebook is required, be sure that the Python code is successfully imported and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a PDF document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [2]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import tensorflow as tf
import cv2
import seaborn as sns
from sklearn.utils import shuffle
from tensorflow.contrib.layers import flatten
import shutil

%matplotlib inline
In [3]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = '../traffic-signs-data/train.p'
validation_file = '../traffic-signs-data/valid.p'
testing_file = '../traffic-signs-data/test.p'
signnames_file = './signnames.csv'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
signnames = pd.read_csv(signnames_file)

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
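Because the pickled images are already resized, any use of 'coords' needs the matching entry from 'sizes' to rescale the box into the 32x32 frame. A minimal sketch of that rescaling (the helper name `rescale_coords` is mine; this notebook does not otherwise use these two keys):

```python
def rescale_coords(coords, size, target=32):
    """Map a bounding box from original-image coordinates to the resized frame.

    coords: (x1, y1, x2, y2) in the original image; size: (width, height).
    """
    x1, y1, x2, y2 = coords
    w, h = size
    sx, sy = target / w, target / h
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# A 128x128 original shrinks by a factor of 32/128 = 0.25 in each axis.
print(rescale_coords((10, 20, 90, 100), (128, 128)))  # (2.5, 5.0, 22.5, 25.0)
```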

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the numpy shape attribute might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [4]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

# TODO: Number of training examples
n_train = X_train.shape[0]

# TODO: Number of validation examples
n_validation = X_valid.shape[0]

# TODO: Number of testing examples.
n_test = X_test.shape[0]

# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes_train = len(np.unique(y_train))
n_classes_valid = len(np.unique(y_valid))
n_classes_test = len(np.unique(y_test))

print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes train =", n_classes_train)
print("Number of classes valid =", n_classes_valid)
print("Number of classes test =", n_classes_test)


# make sure that train, valid, and test all cover the same 
assert n_classes_train == n_classes_valid == n_classes_test
n_classes = n_classes_train
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes train = 43
Number of classes valid = 43
Number of classes test = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?

In [5]:
def plot_count_distribution(labels, title, save_name):
    fig = plt.figure()
    fig.set_size_inches(15, 3)
    ax = fig.add_subplot(111)
    sns.countplot(labels, ax=ax)
    ylabels = ['{:.1f}K'.format(tick / 1000) for tick in ax.get_yticks()]
    # Hide every other x tick label so the 43 class ids stay readable
    for label in ax.xaxis.get_ticklabels()[0::2]:
        label.set_visible(False)
    ax.set_xticklabels(ax.get_xticks(), fontsize=16)
    ax.set_yticklabels(ylabels, fontsize=16)
    ax.set_title(title, fontsize=22)
    ax.set_xlabel('Traffic Sign ID', fontsize=20)
    ax.set_ylabel('Count', fontsize=20)
    fig.savefig('./report_images/{}_Count_by_Sign.png'.format(save_name))

plot_count_distribution(y_train, 'Train Data Count Distribution', 'Train')
plot_count_distribution(y_valid, 'Valid Data Count Distribution', 'Valid')
plot_count_distribution(y_test, 'Test Data Count Distribution', 'Test')

As shown above, the train, validation, and test sets have very similar class distributions.
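That visual similarity can also be checked numerically. Below is a small sketch (the helper names are mine) that measures the largest per-class proportion gap between two splits; applied to `y_train`, `y_valid`, and `y_test`, a small gap supports the claim above. Demonstrated here on tiny synthetic label arrays:

```python
import numpy as np

def class_proportions(y, n_classes):
    """Fraction of examples belonging to each class id."""
    counts = np.bincount(np.asarray(y), minlength=n_classes)
    return counts / counts.sum()

def max_proportion_gap(y_a, y_b, n_classes):
    """Largest per-class difference in proportion between two label arrays."""
    return float(np.max(np.abs(class_proportions(y_a, n_classes)
                               - class_proportions(y_b, n_classes))))

# Two splits with identical class proportions give a gap of exactly 0.
a = [0, 0, 1, 2]
b = [0, 0, 0, 0, 1, 1, 2, 2]
print(max_proportion_gap(a, b, 3))  # 0.0
```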

In [6]:
y_train_DF = pd.DataFrame(y_train, columns=['label'])
group_count = y_train_DF.groupby(['label']).size()
max_id = group_count.idxmax()
min_id = group_count.idxmin()
print('In train data:')
print('Label {}, {}, has the max amount of data, {}'.format(max_id, signnames.iloc[max_id,1], group_count[max_id]))
print('Label {}, {}, has the min amount of data, {}'.format(min_id, signnames.iloc[min_id,1], group_count[min_id]))
In train data:
Label 2, Speed limit (50km/h), has the max amount of data, 2010
Label 0, Speed limit (20km/h), has the min amount of data, 180

I tried two ways of augmenting the data: 1) augmenting the under-sampled classes up to 2,010 images each, and 2) augmenting every class by 3x. Neither improved validation accuracy, so augmentation is not shown in this report.
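For reference, a minimal numpy-only sketch of the kind of geometric jitter I experimented with (the `random_translate` helper here is an illustrative stand-in that wraps pixels at the borders; the actual experiments used `cv2` affine warps):

```python
import numpy as np

def random_translate(img, max_shift=2, rng=None):
    """Shift an HxWxC image by a small random (dx, dy) offset.

    Pixels wrap around at the borders (np.roll), which is acceptable for
    shifts of only a couple of pixels on 32x32 sign crops.
    """
    rng = np.random.default_rng() if rng is None else rng
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

augmented = random_translate(np.zeros((32, 32, 3), dtype=np.uint8))
print(augmented.shape)  # (32, 32, 3)
```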

In [7]:
sns.set_style("whitegrid", {'axes.grid' : False})
num_per_class = 5
for label,count in enumerate(group_count):
    perLabelData = X_train[y_train==label]
    print('Label {}, {}'.format(label,signnames.iloc[label,1]))
    plt.figure(figsize=(15,15))
    for i in range(num_per_class):
        img = perLabelData[np.random.randint(0, count)]
        plt.subplot(1,num_per_class,i+1)
        plt.imshow(img)
    plt.show()
Label 0, Speed limit (20km/h)
Label 1, Speed limit (30km/h)
Label 2, Speed limit (50km/h)
Label 3, Speed limit (60km/h)
Label 4, Speed limit (70km/h)
Label 5, Speed limit (80km/h)
Label 6, End of speed limit (80km/h)
Label 7, Speed limit (100km/h)
Label 8, Speed limit (120km/h)
Label 9, No passing
Label 10, No passing for vehicles over 3.5 metric tons
Label 11, Right-of-way at the next intersection
Label 12, Priority road
Label 13, Yield
Label 14, Stop
Label 15, No vehicles
Label 16, Vehicles over 3.5 metric tons prohibited
Label 17, No entry
Label 18, General caution
Label 19, Dangerous curve to the left
Label 20, Dangerous curve to the right
Label 21, Double curve
Label 22, Bumpy road
Label 23, Slippery road
Label 24, Road narrows on the right
Label 25, Road work
Label 26, Traffic signals
Label 27, Pedestrians
Label 28, Children crossing
Label 29, Bicycles crossing
Label 30, Beware of ice/snow
Label 31, Wild animals crossing
Label 32, End of all speed and passing limits
Label 33, Turn right ahead
Label 34, Turn left ahead
Label 35, Ahead only
Label 36, Go straight or right
Label 37, Go straight or left
Label 38, Keep right
Label 39, Keep left
Label 40, Roundabout mandatory
Label 41, End of no passing
Label 42, End of no passing by vehicles over 3.5 metric tons

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.

Other pre-processing steps are optional. You can try different techniques to see if it improves performance.

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

In [7]:
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include 
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.

def normalize(img):
    return (img - 128.)/128.

# Improve the illumination contrast with CLAHE (Contrast Limited Adaptive Histogram Equalization):
# convert to LAB space, apply CLAHE on the L channel, and convert back to RGB

def hist_eq(img):
    if (len(img.shape) > 3): # a collection of images is passed in
        num_images = img.shape[0]
        image_shape = img.shape[1:]
        locEqImg = np.zeros([num_images,image_shape[0],image_shape[1],image_shape[2]])
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(6,6))
        for i in range(num_images):
            currImg = img[i].squeeze()
            img_lab = cv2.cvtColor(currImg, cv2.COLOR_RGB2LAB)
            img_lab[:,:,0]=clahe.apply(img_lab[:,:,0])
            tmp = cv2.cvtColor(img_lab, cv2.COLOR_LAB2RGB)
            locEqImg[i] = tmp
    else:
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(6,6))
        img_lab = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)
        img_lab[:,:,0]=clahe.apply(img_lab[:,:,0])
        tmp = cv2.cvtColor(img_lab, cv2.COLOR_LAB2RGB)
        locEqImg = tmp
    return locEqImg
In [19]:
print("Demo the effect of doing CLAHE")
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize=(15,6))
for i in range(5):
    img = X_train[np.random.randint(0, X_train.shape[0])]
    plt.subplot(2,5,i+1)
    plt.imshow(img)
    img_hist = hist_eq(img)
    plt.subplot(2,5,5 + i+1)
    plt.imshow(img_hist)
plt.show()
Demo the effect of doing CLAHE
In [8]:
X_train = hist_eq(X_train)
X_train = normalize(X_train)
X_train = X_train.astype('float32')

X_valid = hist_eq(X_valid)
X_valid = normalize(X_valid)
X_valid = X_valid.astype('float32')

X_test = hist_eq(X_test)
X_test = normalize(X_test)
X_test = X_test.astype('float32')

Model Architecture

I also tried concatenating the stage-1 and stage-2 pooling outputs as input to the fully-connected layers, following this paper. It did not improve performance, so I did not use that architecture.
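For reference, the shape arithmetic of that multi-scale variant can be sketched in plain numpy: the stage-1 pooling output (14x14x6) and the stage-2 pooling output (5x5x16) are each flattened and concatenated before the fully-connected layers, giving 1176 + 400 = 1576 features instead of 400:

```python
import numpy as np

batch = 4
stage1 = np.zeros((batch, 14, 14, 6))   # after the first max-pool
stage2 = np.zeros((batch, 5, 5, 16))    # after the second max-pool

# Flatten each stage and concatenate along the feature axis,
# as in the multi-scale architecture from the paper.
flat = np.concatenate([stage1.reshape(batch, -1),
                       stage2.reshape(batch, -1)], axis=1)
print(flat.shape)  # (4, 1576): 14*14*6 + 5*5*16 = 1176 + 400
```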

In [ ]:
### Define your architecture here.
### Feel free to use as many code cells as needed.

# I used the LeNet architecture from the Udacity lecture.
# To visualize the feature maps later, the convolution ops cannot be hidden
# inside a function, so the architecture is defined directly in the next cell.

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
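That rule of thumb can be written down as a tiny diagnostic helper (the function name and the thresholds here are arbitrary choices of mine, not part of the rubric):

```python
def fit_diagnosis(train_acc, valid_acc, low=0.90, gap=0.05):
    """Classify a train/validation accuracy pair with two simple thresholds."""
    if train_acc < low and valid_acc < low:
        return 'underfitting'   # the model is weak on data it has seen
    if train_acc - valid_acc > gap:
        return 'overfitting'    # the model memorizes rather than generalizes
    return 'ok'

print(fit_diagnosis(0.99, 0.93))  # 'overfitting'
print(fit_diagnosis(0.80, 0.78))  # 'underfitting'
```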

In [9]:
## Placeholders and hyperparameters
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
keep_prob = tf.placeholder(tf.float32)

rate = 0.001

# Below is the LeNet architecture
mu = 0
sigma = 0.1
weights = {
    'wc1': tf.Variable(tf.truncated_normal([5,5,3,6], mu, sigma)),
    'wc2': tf.Variable(tf.truncated_normal([5,5,6,16], mu, sigma)),
    'wf3': tf.Variable(tf.truncated_normal([5*5*16, 120], mu, sigma)),
    'wf4': tf.Variable(tf.truncated_normal([120, 84], mu, sigma)),
    'wf5': tf.Variable(tf.truncated_normal([84, 43], mu, sigma))}
biases = {
    'bc1': tf.zeros([6]),
    'bc2': tf.zeros([16]),
    'bf3': tf.zeros([120]),
    'bf4': tf.zeros([84]),
    'bf5': tf.zeros([43]),
}
# Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1 = tf.nn.conv2d(x, weights['wc1'], strides = [1, 1, 1, 1], padding = 'VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
# Activation.
conv1 = tf.nn.relu(conv1)    
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = 'VALID')
conv1 = tf.nn.dropout(conv1, keep_prob)

# Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], strides = [1, 1, 1, 1], padding = 'VALID')
conv2 = tf.nn.bias_add(conv2, biases['bc2'])    
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = 'VALID')
conv2 = tf.nn.dropout(conv2, keep_prob)
# Flatten. Input = 5x5x16. Output = 400.
flat = flatten(conv2)

# Layer 3: Fully Connected. Input = 400. Output = 120.
fc3 = tf.add(tf.matmul(flat, weights['wf3']), biases['bf3'])
# Activation.
fc3 = tf.nn.relu(fc3)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc4 = tf.add(tf.matmul(fc3, weights['wf4']), biases['bf4'])
# Activation.
fc4 = tf.nn.relu(fc4)
# Layer 5: Fully Connected. Input = 84. Output = 43.
logits = tf.add(tf.matmul(fc4, weights['wf5']), biases['bf5'])
# The LeNet architecture ending

predictions = tf.argmax(logits, 1)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
In [10]:
saver = tf.train.Saver(max_to_keep=100)

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    total_loss = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        loss, accuracy = sess.run([loss_operation, accuracy_operation], feed_dict={x: batch_x, y: batch_y, keep_prob:1.0})
        total_accuracy += (accuracy * len(batch_x))
        total_loss += (loss * len(batch_x))
    return total_loss / num_examples, total_accuracy / num_examples
In [11]:
import shutil
save_dir = 'LeNetFinal'
out_dir = os.path.abspath(os.path.join(os.path.curdir, "../Traffic-Sign-Classifier-runs", save_dir))

train_summary_dir = os.path.join(out_dir, "train")        
valid_summary_dir = os.path.join(out_dir, "valid")
checkpoint_dir = os.path.join(out_dir, "checkpoints")

checkpoint_prefix = os.path.join(checkpoint_dir,"model")
checkpoint_every = 100
train_summary_every = 50
valid_summary_every = 100

EPOCHS = 80
BATCH_SIZE = 128
In [12]:
if os.path.exists(out_dir):
    shutil.rmtree(out_dir)
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)
    
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    
    print("Training...")
    print()
    global_step = 0
    train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)
    valid_summary_writer = tf.summary.FileWriter(valid_summary_dir, sess.graph)
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob:0.8})
            global_step += 1
            if global_step % train_summary_every == 0:
                train_loss, train_accuracy = evaluate(batch_x, batch_y)
                train_summaries = tf.Summary()
                train_summaries.value.add(tag='Train Loss', simple_value=train_loss)
                train_summaries.value.add(tag='Train Accuracy', simple_value=train_accuracy)
                train_summary_writer.add_summary(train_summaries, global_step)
                print("Batch {} Train Loss: {:.3f} Train Accuracy: {:.3f}".format(global_step, train_loss, train_accuracy))
                print()
            if global_step % valid_summary_every == 0:
                validation_loss, validation_accuracy = evaluate(X_valid, y_valid)
                valid_summaries = tf.Summary()
                valid_summaries.value.add(tag='Validation Loss', simple_value=validation_loss)
                valid_summaries.value.add(tag='Validation Accuracy', simple_value=validation_accuracy)
                print("valid writing")
                valid_summary_writer.add_summary(valid_summaries, global_step)
                print("EPOCH {} Batch {} ...".format(i+1, global_step))
                print("Validation loss = {:.3f}".format(validation_loss))
                print("Validation Accuracy = {:.3f}".format(validation_accuracy))
                print()
            if global_step % checkpoint_every == 0:
                saver.save(sess, checkpoint_prefix, global_step=global_step)            
    print("Training Done")
Training...

Batch 50 Train Loss: 2.471 Train Accuracy: 0.414

Batch 100 Train Loss: 1.122 Train Accuracy: 0.680

valid writing
EPOCH 1 Batch 100 ...
Validation loss = 1.441
Validation Accuracy = 0.593

Batch 150 Train Loss: 0.620 Train Accuracy: 0.836

Batch 200 Train Loss: 0.491 Train Accuracy: 0.844

valid writing
EPOCH 1 Batch 200 ...
Validation loss = 0.772
Validation Accuracy = 0.759

Batch 250 Train Loss: 0.352 Train Accuracy: 0.938

Batch 300 Train Loss: 0.417 Train Accuracy: 0.875

valid writing
EPOCH 2 Batch 300 ...
Validation loss = 0.509
Validation Accuracy = 0.850

Batch 350 Train Loss: 0.179 Train Accuracy: 0.977

Batch 400 Train Loss: 0.203 Train Accuracy: 0.961

valid writing
EPOCH 2 Batch 400 ...
Validation loss = 0.404
Validation Accuracy = 0.892

Batch 450 Train Loss: 0.187 Train Accuracy: 0.961

Batch 500 Train Loss: 0.206 Train Accuracy: 0.922

valid writing
EPOCH 2 Batch 500 ...
Validation loss = 0.354
Validation Accuracy = 0.896

Batch 550 Train Loss: 0.208 Train Accuracy: 0.945

Batch 600 Train Loss: 0.206 Train Accuracy: 0.953

valid writing
EPOCH 3 Batch 600 ...
Validation loss = 0.328
Validation Accuracy = 0.906

Batch 650 Train Loss: 0.150 Train Accuracy: 0.961

Batch 700 Train Loss: 0.131 Train Accuracy: 0.945

valid writing
EPOCH 3 Batch 700 ...
Validation loss = 0.257
Validation Accuracy = 0.922

Batch 750 Train Loss: 0.154 Train Accuracy: 0.977

Batch 800 Train Loss: 0.076 Train Accuracy: 0.984

valid writing
EPOCH 3 Batch 800 ...
Validation loss = 0.282
Validation Accuracy = 0.915

Batch 850 Train Loss: 0.051 Train Accuracy: 0.992

Batch 900 Train Loss: 0.110 Train Accuracy: 0.969

valid writing
EPOCH 4 Batch 900 ...
Validation loss = 0.273
Validation Accuracy = 0.922

Batch 950 Train Loss: 0.097 Train Accuracy: 0.961

Batch 1000 Train Loss: 0.078 Train Accuracy: 0.969

valid writing
EPOCH 4 Batch 1000 ...
Validation loss = 0.265
Validation Accuracy = 0.923

Batch 1050 Train Loss: 0.093 Train Accuracy: 0.977

Batch 1100 Train Loss: 0.055 Train Accuracy: 0.992

valid writing
EPOCH 5 Batch 1100 ...
Validation loss = 0.249
Validation Accuracy = 0.934

Batch 1150 Train Loss: 0.101 Train Accuracy: 0.977

Batch 1200 Train Loss: 0.060 Train Accuracy: 0.984

valid writing
EPOCH 5 Batch 1200 ...
Validation loss = 0.236
Validation Accuracy = 0.934

Batch 1250 Train Loss: 0.056 Train Accuracy: 0.984

Batch 1300 Train Loss: 0.054 Train Accuracy: 0.992

valid writing
EPOCH 5 Batch 1300 ...
Validation loss = 0.244
Validation Accuracy = 0.929

Batch 1350 Train Loss: 0.040 Train Accuracy: 0.992

Batch 1400 Train Loss: 0.023 Train Accuracy: 1.000

valid writing
EPOCH 6 Batch 1400 ...
Validation loss = 0.226
Validation Accuracy = 0.935

Batch 1450 Train Loss: 0.074 Train Accuracy: 0.977

Batch 1500 Train Loss: 0.085 Train Accuracy: 0.984

valid writing
EPOCH 6 Batch 1500 ...
Validation loss = 0.230
Validation Accuracy = 0.934

Batch 1550 Train Loss: 0.038 Train Accuracy: 0.992

Batch 1600 Train Loss: 0.049 Train Accuracy: 0.992

valid writing
EPOCH 6 Batch 1600 ...
Validation loss = 0.216
Validation Accuracy = 0.940

Batch 1650 Train Loss: 0.068 Train Accuracy: 0.992

Batch 1700 Train Loss: 0.019 Train Accuracy: 0.992

valid writing
EPOCH 7 Batch 1700 ...
Validation loss = 0.201
Validation Accuracy = 0.947

Batch 1750 Train Loss: 0.021 Train Accuracy: 1.000

Batch 1800 Train Loss: 0.038 Train Accuracy: 0.984

valid writing
EPOCH 7 Batch 1800 ...
Validation loss = 0.222
Validation Accuracy = 0.941

Batch 1850 Train Loss: 0.018 Train Accuracy: 1.000

Batch 1900 Train Loss: 0.022 Train Accuracy: 1.000

valid writing
EPOCH 7 Batch 1900 ...
Validation loss = 0.211
Validation Accuracy = 0.946

Batch 1950 Train Loss: 0.015 Train Accuracy: 1.000

Batch 2000 Train Loss: 0.015 Train Accuracy: 1.000

valid writing
EPOCH 8 Batch 2000 ...
Validation loss = 0.210
Validation Accuracy = 0.944

Batch 2050 Train Loss: 0.053 Train Accuracy: 0.977

Batch 2100 Train Loss: 0.014 Train Accuracy: 1.000

valid writing
EPOCH 8 Batch 2100 ...
Validation loss = 0.193
Validation Accuracy = 0.951

Batch 2150 Train Loss: 0.023 Train Accuracy: 0.992

Batch 2200 Train Loss: 0.022 Train Accuracy: 0.992

valid writing
EPOCH 9 Batch 2200 ...
Validation loss = 0.195
Validation Accuracy = 0.949

Batch 2250 Train Loss: 0.011 Train Accuracy: 1.000

Batch 2300 Train Loss: 0.040 Train Accuracy: 0.984

valid writing
EPOCH 9 Batch 2300 ...
Validation loss = 0.238
Validation Accuracy = 0.937

Batch 2350 Train Loss: 0.021 Train Accuracy: 1.000

Batch 2400 Train Loss: 0.024 Train Accuracy: 0.984

valid writing
EPOCH 9 Batch 2400 ...
Validation loss = 0.210
Validation Accuracy = 0.946

Batch 2450 Train Loss: 0.008 Train Accuracy: 1.000

Batch 2500 Train Loss: 0.018 Train Accuracy: 1.000

valid writing
EPOCH 10 Batch 2500 ...
Validation loss = 0.212
Validation Accuracy = 0.947

Batch 2550 Train Loss: 0.018 Train Accuracy: 1.000

Batch 2600 Train Loss: 0.023 Train Accuracy: 0.992

valid writing
EPOCH 10 Batch 2600 ...
Validation loss = 0.192
Validation Accuracy = 0.953

Batch 2650 Train Loss: 0.028 Train Accuracy: 0.984

Batch 2700 Train Loss: 0.014 Train Accuracy: 1.000

valid writing
EPOCH 10 Batch 2700 ...
Validation loss = 0.193
Validation Accuracy = 0.949

Batch 2750 Train Loss: 0.008 Train Accuracy: 1.000

Batch 2800 Train Loss: 0.024 Train Accuracy: 0.992

valid writing
EPOCH 11 Batch 2800 ...
Validation loss = 0.219
Validation Accuracy = 0.937

Batch 2850 Train Loss: 0.051 Train Accuracy: 0.992

Batch 2900 Train Loss: 0.028 Train Accuracy: 0.992

valid writing
EPOCH 11 Batch 2900 ...
Validation loss = 0.197
Validation Accuracy = 0.952

Batch 2950 Train Loss: 0.071 Train Accuracy: 0.992

Batch 3000 Train Loss: 0.031 Train Accuracy: 0.984

valid writing
EPOCH 12 Batch 3000 ...
Validation loss = 0.203
Validation Accuracy = 0.958

Batch 3050 Train Loss: 0.028 Train Accuracy: 0.984

Batch 3100 Train Loss: 0.014 Train Accuracy: 1.000

valid writing
EPOCH 12 Batch 3100 ...
Validation loss = 0.183
Validation Accuracy = 0.956

Batch 3150 Train Loss: 0.012 Train Accuracy: 1.000

Batch 3200 Train Loss: 0.035 Train Accuracy: 0.984

valid writing
EPOCH 12 Batch 3200 ...
Validation loss = 0.184
Validation Accuracy = 0.956

Batch 3250 Train Loss: 0.005 Train Accuracy: 1.000

Batch 3300 Train Loss: 0.016 Train Accuracy: 1.000

valid writing
EPOCH 13 Batch 3300 ...
Validation loss = 0.239
Validation Accuracy = 0.944

Batch 3350 Train Loss: 0.016 Train Accuracy: 0.992

Batch 3400 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 13 Batch 3400 ...
Validation loss = 0.248
Validation Accuracy = 0.944

Batch 3450 Train Loss: 0.010 Train Accuracy: 1.000

Batch 3500 Train Loss: 0.005 Train Accuracy: 1.000

valid writing
EPOCH 13 Batch 3500 ...
Validation loss = 0.220
Validation Accuracy = 0.956

Batch 3550 Train Loss: 0.018 Train Accuracy: 1.000

Batch 3600 Train Loss: 0.022 Train Accuracy: 1.000

valid writing
EPOCH 14 Batch 3600 ...
Validation loss = 0.201
Validation Accuracy = 0.952

Batch 3650 Train Loss: 0.008 Train Accuracy: 1.000

Batch 3700 Train Loss: 0.008 Train Accuracy: 1.000

valid writing
EPOCH 14 Batch 3700 ...
Validation loss = 0.200
Validation Accuracy = 0.952

Batch 3750 Train Loss: 0.008 Train Accuracy: 1.000

Batch 3800 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 14 Batch 3800 ...
Validation loss = 0.171
Validation Accuracy = 0.958

Batch 3850 Train Loss: 0.015 Train Accuracy: 1.000

Batch 3900 Train Loss: 0.011 Train Accuracy: 1.000

valid writing
EPOCH 15 Batch 3900 ...
Validation loss = 0.163
Validation Accuracy = 0.956

Batch 3950 Train Loss: 0.006 Train Accuracy: 1.000

Batch 4000 Train Loss: 0.007 Train Accuracy: 1.000

valid writing
EPOCH 15 Batch 4000 ...
Validation loss = 0.201
Validation Accuracy = 0.953

Batch 4050 Train Loss: 0.004 Train Accuracy: 1.000

Batch 4100 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 16 Batch 4100 ...
Validation loss = 0.150
Validation Accuracy = 0.958

Batch 4150 Train Loss: 0.006 Train Accuracy: 1.000

Batch 4200 Train Loss: 0.012 Train Accuracy: 0.992

valid writing
EPOCH 16 Batch 4200 ...
Validation loss = 0.177
Validation Accuracy = 0.954

Batch 4250 Train Loss: 0.005 Train Accuracy: 1.000

Batch 4300 Train Loss: 0.012 Train Accuracy: 1.000

valid writing
EPOCH 16 Batch 4300 ...
Validation loss = 0.153
Validation Accuracy = 0.959

Batch 4350 Train Loss: 0.034 Train Accuracy: 0.984

Batch 4400 Train Loss: 0.014 Train Accuracy: 1.000

valid writing
EPOCH 17 Batch 4400 ...
Validation loss = 0.149
Validation Accuracy = 0.957

Batch 4450 Train Loss: 0.007 Train Accuracy: 1.000

Batch 4500 Train Loss: 0.010 Train Accuracy: 1.000

valid writing
EPOCH 17 Batch 4500 ...
Validation loss = 0.175
Validation Accuracy = 0.957

Batch 4550 Train Loss: 0.006 Train Accuracy: 1.000

Batch 4600 Train Loss: 0.005 Train Accuracy: 1.000

valid writing
EPOCH 17 Batch 4600 ...
Validation loss = 0.171
Validation Accuracy = 0.956

Batch 4650 Train Loss: 0.013 Train Accuracy: 1.000

Batch 4700 Train Loss: 0.009 Train Accuracy: 1.000

valid writing
EPOCH 18 Batch 4700 ...
Validation loss = 0.228
Validation Accuracy = 0.956

Batch 4750 Train Loss: 0.010 Train Accuracy: 1.000

Batch 4800 Train Loss: 0.008 Train Accuracy: 1.000

valid writing
EPOCH 18 Batch 4800 ...
Validation loss = 0.191
Validation Accuracy = 0.958

Batch 4850 Train Loss: 0.005 Train Accuracy: 1.000

Batch 4900 Train Loss: 0.014 Train Accuracy: 0.992

valid writing
EPOCH 19 Batch 4900 ...
Validation loss = 0.208
Validation Accuracy = 0.948

Batch 4950 Train Loss: 0.022 Train Accuracy: 0.992

Batch 5000 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 19 Batch 5000 ...
Validation loss = 0.194
Validation Accuracy = 0.957

Batch 5050 Train Loss: 0.005 Train Accuracy: 1.000

Batch 5100 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 19 Batch 5100 ...
Validation loss = 0.172
Validation Accuracy = 0.957

Batch 5150 Train Loss: 0.004 Train Accuracy: 1.000

Batch 5200 Train Loss: 0.023 Train Accuracy: 0.992

valid writing
EPOCH 20 Batch 5200 ...
Validation loss = 0.224
Validation Accuracy = 0.957

Batch 5250 Train Loss: 0.005 Train Accuracy: 1.000

Batch 5300 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 20 Batch 5300 ...
Validation loss = 0.198
Validation Accuracy = 0.957

Batch 5350 Train Loss: 0.002 Train Accuracy: 1.000

Batch 5400 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 20 Batch 5400 ...
Validation loss = 0.232
Validation Accuracy = 0.949

Batch 5450 Train Loss: 0.005 Train Accuracy: 1.000

Batch 5500 Train Loss: 0.008 Train Accuracy: 1.000

valid writing
EPOCH 21 Batch 5500 ...
Validation loss = 0.246
Validation Accuracy = 0.948

Batch 5550 Train Loss: 0.004 Train Accuracy: 1.000

Batch 5600 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 21 Batch 5600 ...
Validation loss = 0.239
Validation Accuracy = 0.953

Batch 5650 Train Loss: 0.006 Train Accuracy: 1.000

Batch 5700 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 21 Batch 5700 ...
Validation loss = 0.236
Validation Accuracy = 0.951

Batch 5750 Train Loss: 0.012 Train Accuracy: 1.000

Batch 5800 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 22 Batch 5800 ...
Validation loss = 0.258
Validation Accuracy = 0.953

Batch 5850 Train Loss: 0.007 Train Accuracy: 1.000

Batch 5900 Train Loss: 0.013 Train Accuracy: 1.000

valid writing
EPOCH 22 Batch 5900 ...
Validation loss = 0.215
Validation Accuracy = 0.963

Batch 5950 Train Loss: 0.008 Train Accuracy: 1.000

Batch 6000 Train Loss: 0.014 Train Accuracy: 0.992

valid writing
EPOCH 23 Batch 6000 ...
Validation loss = 0.203
Validation Accuracy = 0.955

Batch 6050 Train Loss: 0.004 Train Accuracy: 1.000

Batch 6100 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 23 Batch 6100 ...
Validation loss = 0.214
Validation Accuracy = 0.957

Batch 6150 Train Loss: 0.008 Train Accuracy: 1.000

Batch 6200 Train Loss: 0.007 Train Accuracy: 1.000

valid writing
EPOCH 23 Batch 6200 ...
Validation loss = 0.190
Validation Accuracy = 0.955

Batch 6250 Train Loss: 0.001 Train Accuracy: 1.000

Batch 6300 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 24 Batch 6300 ...
Validation loss = 0.184
Validation Accuracy = 0.956

Batch 6350 Train Loss: 0.007 Train Accuracy: 1.000

Batch 6400 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 24 Batch 6400 ...
Validation loss = 0.225
Validation Accuracy = 0.952

Batch 6450 Train Loss: 0.003 Train Accuracy: 1.000

Batch 6500 Train Loss: 0.010 Train Accuracy: 0.992

valid writing
EPOCH 24 Batch 6500 ...
Validation loss = 0.203
Validation Accuracy = 0.960

Batch 6550 Train Loss: 0.015 Train Accuracy: 0.992

Batch 6600 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 25 Batch 6600 ...
Validation loss = 0.214
Validation Accuracy = 0.954

Batch 6650 Train Loss: 0.005 Train Accuracy: 1.000

Batch 6700 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 25 Batch 6700 ...
Validation loss = 0.196
Validation Accuracy = 0.957

Batch 6750 Train Loss: 0.002 Train Accuracy: 1.000

Batch 6800 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 25 Batch 6800 ...
Validation loss = 0.204
Validation Accuracy = 0.951

Batch 6850 Train Loss: 0.005 Train Accuracy: 1.000

Batch 6900 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 26 Batch 6900 ...
Validation loss = 0.246
Validation Accuracy = 0.951

Batch 6950 Train Loss: 0.004 Train Accuracy: 1.000

Batch 7000 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 26 Batch 7000 ...
Validation loss = 0.223
Validation Accuracy = 0.954

Batch 7050 Train Loss: 0.017 Train Accuracy: 0.984

Batch 7100 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 27 Batch 7100 ...
Validation loss = 0.180
Validation Accuracy = 0.960

Batch 7150 Train Loss: 0.005 Train Accuracy: 1.000

Batch 7200 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 27 Batch 7200 ...
Validation loss = 0.178
Validation Accuracy = 0.959

Batch 7250 Train Loss: 0.005 Train Accuracy: 1.000

Batch 7300 Train Loss: 0.005 Train Accuracy: 1.000

valid writing
EPOCH 27 Batch 7300 ...
Validation loss = 0.197
Validation Accuracy = 0.964

Batch 7350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 7400 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 28 Batch 7400 ...
Validation loss = 0.187
Validation Accuracy = 0.962

Batch 7450 Train Loss: 0.003 Train Accuracy: 1.000

Batch 7500 Train Loss: 0.012 Train Accuracy: 0.992

valid writing
EPOCH 28 Batch 7500 ...
Validation loss = 0.178
Validation Accuracy = 0.957

Batch 7550 Train Loss: 0.001 Train Accuracy: 1.000

Batch 7600 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 28 Batch 7600 ...
Validation loss = 0.169
Validation Accuracy = 0.961

Batch 7650 Train Loss: 0.002 Train Accuracy: 1.000

Batch 7700 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 29 Batch 7700 ...
Validation loss = 0.171
Validation Accuracy = 0.957

Batch 7750 Train Loss: 0.002 Train Accuracy: 1.000

Batch 7800 Train Loss: 0.016 Train Accuracy: 0.984

valid writing
EPOCH 29 Batch 7800 ...
Validation loss = 0.160
Validation Accuracy = 0.963

Batch 7850 Train Loss: 0.006 Train Accuracy: 1.000

Batch 7900 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 30 Batch 7900 ...
Validation loss = 0.186
Validation Accuracy = 0.957

Batch 7950 Train Loss: 0.004 Train Accuracy: 1.000

Batch 8000 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 30 Batch 8000 ...
Validation loss = 0.150
Validation Accuracy = 0.962

Batch 8050 Train Loss: 0.001 Train Accuracy: 1.000

Batch 8100 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 30 Batch 8100 ...
Validation loss = 0.187
Validation Accuracy = 0.961

Batch 8150 Train Loss: 0.003 Train Accuracy: 1.000

Batch 8200 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 31 Batch 8200 ...
Validation loss = 0.158
Validation Accuracy = 0.965

Batch 8250 Train Loss: 0.005 Train Accuracy: 1.000

Batch 8300 Train Loss: 0.007 Train Accuracy: 1.000

valid writing
EPOCH 31 Batch 8300 ...
Validation loss = 0.149
Validation Accuracy = 0.963

Batch 8350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 8400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 31 Batch 8400 ...
Validation loss = 0.198
Validation Accuracy = 0.966

Batch 8450 Train Loss: 0.002 Train Accuracy: 1.000

Batch 8500 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 32 Batch 8500 ...
Validation loss = 0.157
Validation Accuracy = 0.964

Batch 8550 Train Loss: 0.002 Train Accuracy: 1.000

Batch 8600 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 32 Batch 8600 ...
Validation loss = 0.218
Validation Accuracy = 0.960

Batch 8650 Train Loss: 0.009 Train Accuracy: 1.000

Batch 8700 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 32 Batch 8700 ...
Validation loss = 0.167
Validation Accuracy = 0.961

Batch 8750 Train Loss: 0.004 Train Accuracy: 1.000

Batch 8800 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 33 Batch 8800 ...
Validation loss = 0.200
Validation Accuracy = 0.962

Batch 8850 Train Loss: 0.006 Train Accuracy: 1.000

Batch 8900 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 33 Batch 8900 ...
Validation loss = 0.221
Validation Accuracy = 0.956

Batch 8950 Train Loss: 0.005 Train Accuracy: 1.000

Batch 9000 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 34 Batch 9000 ...
Validation loss = 0.204
Validation Accuracy = 0.963

Batch 9050 Train Loss: 0.003 Train Accuracy: 1.000

Batch 9100 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 34 Batch 9100 ...
Validation loss = 0.168
Validation Accuracy = 0.959

Batch 9150 Train Loss: 0.002 Train Accuracy: 1.000

Batch 9200 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 34 Batch 9200 ...
Validation loss = 0.203
Validation Accuracy = 0.960

Batch 9250 Train Loss: 0.001 Train Accuracy: 1.000

Batch 9300 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 35 Batch 9300 ...
Validation loss = 0.201
Validation Accuracy = 0.960

Batch 9350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 9400 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 35 Batch 9400 ...
Validation loss = 0.192
Validation Accuracy = 0.961

Batch 9450 Train Loss: 0.002 Train Accuracy: 1.000

Batch 9500 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 35 Batch 9500 ...
Validation loss = 0.184
Validation Accuracy = 0.967

Batch 9550 Train Loss: 0.008 Train Accuracy: 0.992

Batch 9600 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 36 Batch 9600 ...
Validation loss = 0.172
Validation Accuracy = 0.963

Batch 9650 Train Loss: 0.005 Train Accuracy: 1.000

Batch 9700 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 36 Batch 9700 ...
Validation loss = 0.168
Validation Accuracy = 0.960

Batch 9750 Train Loss: 0.007 Train Accuracy: 1.000

Batch 9800 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 37 Batch 9800 ...
Validation loss = 0.160
Validation Accuracy = 0.963

Batch 9850 Train Loss: 0.001 Train Accuracy: 1.000

Batch 9900 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 37 Batch 9900 ...
Validation loss = 0.193
Validation Accuracy = 0.958

Batch 9950 Train Loss: 0.003 Train Accuracy: 1.000

Batch 10000 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 37 Batch 10000 ...
Validation loss = 0.152
Validation Accuracy = 0.964

Batch 10050 Train Loss: 0.003 Train Accuracy: 1.000

Batch 10100 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 38 Batch 10100 ...
Validation loss = 0.178
Validation Accuracy = 0.962

Batch 10150 Train Loss: 0.002 Train Accuracy: 1.000

Batch 10200 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 38 Batch 10200 ...
Validation loss = 0.162
Validation Accuracy = 0.963

Batch 10250 Train Loss: 0.000 Train Accuracy: 1.000

Batch 10300 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 38 Batch 10300 ...
Validation loss = 0.170
Validation Accuracy = 0.964

Batch 10350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 10400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 39 Batch 10400 ...
Validation loss = 0.159
Validation Accuracy = 0.965

Batch 10450 Train Loss: 0.003 Train Accuracy: 1.000

Batch 10500 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 39 Batch 10500 ...
Validation loss = 0.171
Validation Accuracy = 0.962

Batch 10550 Train Loss: 0.001 Train Accuracy: 1.000

Batch 10600 Train Loss: 0.005 Train Accuracy: 1.000

valid writing
EPOCH 39 Batch 10600 ...
Validation loss = 0.140
Validation Accuracy = 0.968

Batch 10650 Train Loss: 0.008 Train Accuracy: 1.000

Batch 10700 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 40 Batch 10700 ...
Validation loss = 0.142
Validation Accuracy = 0.968

Batch 10750 Train Loss: 0.003 Train Accuracy: 1.000

Batch 10800 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 40 Batch 10800 ...
Validation loss = 0.153
Validation Accuracy = 0.963

Batch 10850 Train Loss: 0.002 Train Accuracy: 1.000

Batch 10900 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 41 Batch 10900 ...
Validation loss = 0.155
Validation Accuracy = 0.961

Batch 10950 Train Loss: 0.008 Train Accuracy: 1.000

Batch 11000 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 41 Batch 11000 ...
Validation loss = 0.142
Validation Accuracy = 0.961

Batch 11050 Train Loss: 0.001 Train Accuracy: 1.000

Batch 11100 Train Loss: 0.010 Train Accuracy: 1.000

valid writing
EPOCH 41 Batch 11100 ...
Validation loss = 0.155
Validation Accuracy = 0.965

Batch 11150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 11200 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 42 Batch 11200 ...
Validation loss = 0.191
Validation Accuracy = 0.957

Batch 11250 Train Loss: 0.007 Train Accuracy: 1.000

Batch 11300 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 42 Batch 11300 ...
Validation loss = 0.161
Validation Accuracy = 0.958

Batch 11350 Train Loss: 0.002 Train Accuracy: 1.000

Batch 11400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 42 Batch 11400 ...
Validation loss = 0.161
Validation Accuracy = 0.963

Batch 11450 Train Loss: 0.001 Train Accuracy: 1.000

Batch 11500 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 43 Batch 11500 ...
Validation loss = 0.145
Validation Accuracy = 0.965

Batch 11550 Train Loss: 0.001 Train Accuracy: 1.000

Batch 11600 Train Loss: 0.014 Train Accuracy: 1.000

valid writing
EPOCH 43 Batch 11600 ...
Validation loss = 0.158
Validation Accuracy = 0.965

Batch 11650 Train Loss: 0.006 Train Accuracy: 1.000

Batch 11700 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 44 Batch 11700 ...
Validation loss = 0.148
Validation Accuracy = 0.966

Batch 11750 Train Loss: 0.003 Train Accuracy: 1.000

Batch 11800 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 44 Batch 11800 ...
Validation loss = 0.134
Validation Accuracy = 0.969

Batch 11850 Train Loss: 0.001 Train Accuracy: 1.000

Batch 11900 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 44 Batch 11900 ...
Validation loss = 0.186
Validation Accuracy = 0.959

Batch 11950 Train Loss: 0.001 Train Accuracy: 1.000

Batch 12000 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 45 Batch 12000 ...
Validation loss = 0.159
Validation Accuracy = 0.962

Batch 12050 Train Loss: 0.003 Train Accuracy: 1.000

Batch 12100 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 45 Batch 12100 ...
Validation loss = 0.161
Validation Accuracy = 0.961

Batch 12150 Train Loss: 0.002 Train Accuracy: 1.000

Batch 12200 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 45 Batch 12200 ...
Validation loss = 0.159
Validation Accuracy = 0.960

Batch 12250 Train Loss: 0.014 Train Accuracy: 1.000

Batch 12300 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 46 Batch 12300 ...
Validation loss = 0.153
Validation Accuracy = 0.963

Batch 12350 Train Loss: 0.000 Train Accuracy: 1.000

Batch 12400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 46 Batch 12400 ...
Validation loss = 0.154
Validation Accuracy = 0.961

Batch 12450 Train Loss: 0.002 Train Accuracy: 1.000

Batch 12500 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 46 Batch 12500 ...
Validation loss = 0.168
Validation Accuracy = 0.962

Batch 12550 Train Loss: 0.002 Train Accuracy: 1.000

Batch 12600 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 47 Batch 12600 ...
Validation loss = 0.186
Validation Accuracy = 0.961

Batch 12650 Train Loss: 0.001 Train Accuracy: 1.000

Batch 12700 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 47 Batch 12700 ...
Validation loss = 0.152
Validation Accuracy = 0.965

Batch 12750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 12800 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 48 Batch 12800 ...
Validation loss = 0.148
Validation Accuracy = 0.967

Batch 12850 Train Loss: 0.008 Train Accuracy: 1.000

Batch 12900 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 48 Batch 12900 ...
Validation loss = 0.180
Validation Accuracy = 0.961

Batch 12950 Train Loss: 0.001 Train Accuracy: 1.000

Batch 13000 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 48 Batch 13000 ...
Validation loss = 0.166
Validation Accuracy = 0.964

Batch 13050 Train Loss: 0.002 Train Accuracy: 1.000

Batch 13100 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 49 Batch 13100 ...
Validation loss = 0.129
Validation Accuracy = 0.965

Batch 13150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 13200 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 49 Batch 13200 ...
Validation loss = 0.151
Validation Accuracy = 0.966

Batch 13250 Train Loss: 0.000 Train Accuracy: 1.000

Batch 13300 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 49 Batch 13300 ...
Validation loss = 0.165
Validation Accuracy = 0.964

Batch 13350 Train Loss: 0.005 Train Accuracy: 1.000

Batch 13400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 50 Batch 13400 ...
Validation loss = 0.164
Validation Accuracy = 0.963

Batch 13450 Train Loss: 0.001 Train Accuracy: 1.000

Batch 13500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 50 Batch 13500 ...
Validation loss = 0.135
Validation Accuracy = 0.966

Batch 13550 Train Loss: 0.003 Train Accuracy: 1.000

Batch 13600 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 50 Batch 13600 ...
Validation loss = 0.167
Validation Accuracy = 0.962

Batch 13650 Train Loss: 0.006 Train Accuracy: 1.000

Batch 13700 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 51 Batch 13700 ...
Validation loss = 0.125
Validation Accuracy = 0.966

Batch 13750 Train Loss: 0.000 Train Accuracy: 1.000

Batch 13800 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 51 Batch 13800 ...
Validation loss = 0.160
Validation Accuracy = 0.963

Batch 13850 Train Loss: 0.002 Train Accuracy: 1.000

Batch 13900 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 52 Batch 13900 ...
Validation loss = 0.152
Validation Accuracy = 0.963

Batch 13950 Train Loss: 0.003 Train Accuracy: 1.000

Batch 14000 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 52 Batch 14000 ...
Validation loss = 0.201
Validation Accuracy = 0.956

Batch 14050 Train Loss: 0.001 Train Accuracy: 1.000

Batch 14100 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 52 Batch 14100 ...
Validation loss = 0.149
Validation Accuracy = 0.964

Batch 14150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 14200 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 53 Batch 14200 ...
Validation loss = 0.179
Validation Accuracy = 0.961

Batch 14250 Train Loss: 0.001 Train Accuracy: 1.000

Batch 14300 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 53 Batch 14300 ...
Validation loss = 0.164
Validation Accuracy = 0.967

Batch 14350 Train Loss: 0.002 Train Accuracy: 1.000

Batch 14400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 53 Batch 14400 ...
Validation loss = 0.152
Validation Accuracy = 0.966

Batch 14450 Train Loss: 0.001 Train Accuracy: 1.000

Batch 14500 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 54 Batch 14500 ...
Validation loss = 0.147
Validation Accuracy = 0.966

Batch 14550 Train Loss: 0.004 Train Accuracy: 1.000

Batch 14600 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 54 Batch 14600 ...
Validation loss = 0.204
Validation Accuracy = 0.957

Batch 14650 Train Loss: 0.005 Train Accuracy: 1.000

Batch 14700 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 55 Batch 14700 ...
Validation loss = 0.206
Validation Accuracy = 0.959

Batch 14750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 14800 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 55 Batch 14800 ...
Validation loss = 0.169
Validation Accuracy = 0.965

Batch 14850 Train Loss: 0.000 Train Accuracy: 1.000

Batch 14900 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 55 Batch 14900 ...
Validation loss = 0.151
Validation Accuracy = 0.964

Batch 14950 Train Loss: 0.010 Train Accuracy: 0.992

Batch 15000 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 56 Batch 15000 ...
Validation loss = 0.165
Validation Accuracy = 0.965

Batch 15050 Train Loss: 0.000 Train Accuracy: 1.000

Batch 15100 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 56 Batch 15100 ...
Validation loss = 0.136
Validation Accuracy = 0.963

Batch 15150 Train Loss: 0.001 Train Accuracy: 1.000

Batch 15200 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 56 Batch 15200 ...
Validation loss = 0.164
Validation Accuracy = 0.962

Batch 15250 Train Loss: 0.007 Train Accuracy: 1.000

Batch 15300 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 57 Batch 15300 ...
Validation loss = 0.165
Validation Accuracy = 0.965

Batch 15350 Train Loss: 0.003 Train Accuracy: 1.000

Batch 15400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 57 Batch 15400 ...
Validation loss = 0.165
Validation Accuracy = 0.964

Batch 15450 Train Loss: 0.000 Train Accuracy: 1.000

Batch 15500 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 57 Batch 15500 ...
Validation loss = 0.190
Validation Accuracy = 0.965

Batch 15550 Train Loss: 0.001 Train Accuracy: 1.000

Batch 15600 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 58 Batch 15600 ...
Validation loss = 0.173
Validation Accuracy = 0.968

Batch 15650 Train Loss: 0.001 Train Accuracy: 1.000

Batch 15700 Train Loss: 0.016 Train Accuracy: 0.992

valid writing
EPOCH 58 Batch 15700 ...
Validation loss = 0.205
Validation Accuracy = 0.960

Batch 15750 Train Loss: 0.003 Train Accuracy: 1.000

Batch 15800 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 59 Batch 15800 ...
Validation loss = 0.163
Validation Accuracy = 0.963

Batch 15850 Train Loss: 0.000 Train Accuracy: 1.000

Batch 15900 Train Loss: 0.006 Train Accuracy: 1.000

valid writing
EPOCH 59 Batch 15900 ...
Validation loss = 0.113
Validation Accuracy = 0.973

Batch 15950 Train Loss: 0.002 Train Accuracy: 1.000

Batch 16000 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 59 Batch 16000 ...
Validation loss = 0.176
Validation Accuracy = 0.964

Batch 16050 Train Loss: 0.000 Train Accuracy: 1.000

Batch 16100 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 60 Batch 16100 ...
Validation loss = 0.142
Validation Accuracy = 0.967

Batch 16150 Train Loss: 0.003 Train Accuracy: 1.000

Batch 16200 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 60 Batch 16200 ...
Validation loss = 0.179
Validation Accuracy = 0.961

Batch 16250 Train Loss: 0.002 Train Accuracy: 1.000

Batch 16300 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 60 Batch 16300 ...
Validation loss = 0.159
Validation Accuracy = 0.963

Batch 16350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 16400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 61 Batch 16400 ...
Validation loss = 0.193
Validation Accuracy = 0.961

Batch 16450 Train Loss: 0.000 Train Accuracy: 1.000

Batch 16500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 61 Batch 16500 ...
Validation loss = 0.169
Validation Accuracy = 0.962

Batch 16550 Train Loss: 0.002 Train Accuracy: 1.000

Batch 16600 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 62 Batch 16600 ...
Validation loss = 0.160
Validation Accuracy = 0.963

Batch 16650 Train Loss: 0.000 Train Accuracy: 1.000

Batch 16700 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 62 Batch 16700 ...
Validation loss = 0.179
Validation Accuracy = 0.966

Batch 16750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 16800 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 62 Batch 16800 ...
Validation loss = 0.112
Validation Accuracy = 0.975

Batch 16850 Train Loss: 0.003 Train Accuracy: 1.000

Batch 16900 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 63 Batch 16900 ...
Validation loss = 0.157
Validation Accuracy = 0.969

Batch 16950 Train Loss: 0.001 Train Accuracy: 1.000

Batch 17000 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 63 Batch 17000 ...
Validation loss = 0.158
Validation Accuracy = 0.962

Batch 17050 Train Loss: 0.000 Train Accuracy: 1.000

Batch 17100 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 63 Batch 17100 ...
Validation loss = 0.157
Validation Accuracy = 0.960

Batch 17150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 17200 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 64 Batch 17200 ...
Validation loss = 0.144
Validation Accuracy = 0.961

Batch 17250 Train Loss: 0.002 Train Accuracy: 1.000

Batch 17300 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 64 Batch 17300 ...
Validation loss = 0.170
Validation Accuracy = 0.956

Batch 17350 Train Loss: 0.001 Train Accuracy: 1.000

Batch 17400 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 64 Batch 17400 ...
Validation loss = 0.207
Validation Accuracy = 0.959

Batch 17450 Train Loss: 0.005 Train Accuracy: 1.000

Batch 17500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 65 Batch 17500 ...
Validation loss = 0.130
Validation Accuracy = 0.970

Batch 17550 Train Loss: 0.000 Train Accuracy: 1.000

Batch 17600 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 65 Batch 17600 ...
Validation loss = 0.201
Validation Accuracy = 0.961

Batch 17650 Train Loss: 0.001 Train Accuracy: 1.000

Batch 17700 Train Loss: 0.010 Train Accuracy: 0.992

valid writing
EPOCH 66 Batch 17700 ...
Validation loss = 0.230
Validation Accuracy = 0.956

Batch 17750 Train Loss: 0.000 Train Accuracy: 1.000

Batch 17800 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 66 Batch 17800 ...
Validation loss = 0.156
Validation Accuracy = 0.962

Batch 17850 Train Loss: 0.001 Train Accuracy: 1.000

Batch 17900 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 66 Batch 17900 ...
Validation loss = 0.228
Validation Accuracy = 0.954

Batch 17950 Train Loss: 0.002 Train Accuracy: 1.000

Batch 18000 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 67 Batch 18000 ...
Validation loss = 0.151
Validation Accuracy = 0.962

Batch 18050 Train Loss: 0.005 Train Accuracy: 1.000

Batch 18100 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 67 Batch 18100 ...
Validation loss = 0.164
Validation Accuracy = 0.959

Batch 18150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18200 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 67 Batch 18200 ...
Validation loss = 0.201
Validation Accuracy = 0.962

Batch 18250 Train Loss: 0.001 Train Accuracy: 1.000

Batch 18300 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 68 Batch 18300 ...
Validation loss = 0.154
Validation Accuracy = 0.964

Batch 18350 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 68 Batch 18400 ...
Validation loss = 0.212
Validation Accuracy = 0.963

Batch 18450 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18500 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 69 Batch 18500 ...
Validation loss = 0.151
Validation Accuracy = 0.966

Batch 18550 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18600 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 69 Batch 18600 ...
Validation loss = 0.169
Validation Accuracy = 0.961

Batch 18650 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18700 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 69 Batch 18700 ...
Validation loss = 0.160
Validation Accuracy = 0.963

Batch 18750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 18800 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 70 Batch 18800 ...
Validation loss = 0.189
Validation Accuracy = 0.959

Batch 18850 Train Loss: 0.000 Train Accuracy: 1.000

Batch 18900 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 70 Batch 18900 ...
Validation loss = 0.168
Validation Accuracy = 0.962

Batch 18950 Train Loss: 0.000 Train Accuracy: 1.000

Batch 19000 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 70 Batch 19000 ...
Validation loss = 0.141
Validation Accuracy = 0.966

Batch 19050 Train Loss: 0.001 Train Accuracy: 1.000

Batch 19100 Train Loss: 0.011 Train Accuracy: 1.000

valid writing
EPOCH 71 Batch 19100 ...
Validation loss = 0.135
Validation Accuracy = 0.963

Batch 19150 Train Loss: 0.001 Train Accuracy: 1.000

Batch 19200 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 71 Batch 19200 ...
Validation loss = 0.163
Validation Accuracy = 0.964

Batch 19250 Train Loss: 0.006 Train Accuracy: 1.000

Batch 19300 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 71 Batch 19300 ...
Validation loss = 0.167
Validation Accuracy = 0.963

Batch 19350 Train Loss: 0.000 Train Accuracy: 1.000

Batch 19400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 72 Batch 19400 ...
Validation loss = 0.144
Validation Accuracy = 0.966

Batch 19450 Train Loss: 0.001 Train Accuracy: 1.000

Batch 19500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 72 Batch 19500 ...
Validation loss = 0.186
Validation Accuracy = 0.968

Batch 19550 Train Loss: 0.001 Train Accuracy: 1.000

Batch 19600 Train Loss: 0.002 Train Accuracy: 1.000

valid writing
EPOCH 73 Batch 19600 ...
Validation loss = 0.169
Validation Accuracy = 0.965

Batch 19650 Train Loss: 0.000 Train Accuracy: 1.000

Batch 19700 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 73 Batch 19700 ...
Validation loss = 0.168
Validation Accuracy = 0.963

Batch 19750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 19800 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 73 Batch 19800 ...
Validation loss = 0.131
Validation Accuracy = 0.968

Batch 19850 Train Loss: 0.002 Train Accuracy: 1.000

Batch 19900 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 74 Batch 19900 ...
Validation loss = 0.201
Validation Accuracy = 0.962

Batch 19950 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20000 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 74 Batch 20000 ...
Validation loss = 0.149
Validation Accuracy = 0.968

Batch 20050 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20100 Train Loss: 0.006 Train Accuracy: 0.992

valid writing
EPOCH 74 Batch 20100 ...
Validation loss = 0.183
Validation Accuracy = 0.962

Batch 20150 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20200 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 75 Batch 20200 ...
Validation loss = 0.166
Validation Accuracy = 0.968

Batch 20250 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20300 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 75 Batch 20300 ...
Validation loss = 0.197
Validation Accuracy = 0.964

Batch 20350 Train Loss: 0.003 Train Accuracy: 1.000

Batch 20400 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 75 Batch 20400 ...
Validation loss = 0.172
Validation Accuracy = 0.968

Batch 20450 Train Loss: 0.002 Train Accuracy: 1.000

Batch 20500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 76 Batch 20500 ...
Validation loss = 0.194
Validation Accuracy = 0.965

Batch 20550 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20600 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 76 Batch 20600 ...
Validation loss = 0.155
Validation Accuracy = 0.964

Batch 20650 Train Loss: 0.000 Train Accuracy: 1.000

Batch 20700 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 77 Batch 20700 ...
Validation loss = 0.164
Validation Accuracy = 0.967

Batch 20750 Train Loss: 0.001 Train Accuracy: 1.000

Batch 20800 Train Loss: 0.003 Train Accuracy: 1.000

valid writing
EPOCH 77 Batch 20800 ...
Validation loss = 0.204
Validation Accuracy = 0.959

Batch 20850 Train Loss: 0.001 Train Accuracy: 1.000

Batch 20900 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 77 Batch 20900 ...
Validation loss = 0.175
Validation Accuracy = 0.966

Batch 20950 Train Loss: 0.002 Train Accuracy: 1.000

Batch 21000 Train Loss: 0.001 Train Accuracy: 1.000

valid writing
EPOCH 78 Batch 21000 ...
Validation loss = 0.125
Validation Accuracy = 0.969

Batch 21050 Train Loss: 0.001 Train Accuracy: 1.000

Batch 21100 Train Loss: 0.004 Train Accuracy: 1.000

valid writing
EPOCH 78 Batch 21100 ...
Validation loss = 0.137
Validation Accuracy = 0.971

Batch 21150 Train Loss: 0.001 Train Accuracy: 1.000

Batch 21200 Train Loss: 0.007 Train Accuracy: 1.000

valid writing
EPOCH 78 Batch 21200 ...
Validation loss = 0.166
Validation Accuracy = 0.968

Batch 21250 Train Loss: 0.000 Train Accuracy: 1.000

Batch 21300 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 79 Batch 21300 ...
Validation loss = 0.194
Validation Accuracy = 0.965

Batch 21350 Train Loss: 0.000 Train Accuracy: 1.000

Batch 21400 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 79 Batch 21400 ...
Validation loss = 0.159
Validation Accuracy = 0.963

Batch 21450 Train Loss: 0.001 Train Accuracy: 1.000

Batch 21500 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 80 Batch 21500 ...
Validation loss = 0.152
Validation Accuracy = 0.967

Batch 21550 Train Loss: 0.000 Train Accuracy: 1.000

Batch 21600 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 80 Batch 21600 ...
Validation loss = 0.234
Validation Accuracy = 0.959

Batch 21650 Train Loss: 0.000 Train Accuracy: 1.000

Batch 21700 Train Loss: 0.000 Train Accuracy: 1.000

valid writing
EPOCH 80 Batch 21700 ...
Validation loss = 0.206
Validation Accuracy = 0.959

Batch 21750 Train Loss: 0.000 Train Accuracy: 1.000

Training Done
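The log above shows training accuracy saturating at 1.000 while validation accuracy plateaus around 0.96, a classic overfitting signature. One common response is to checkpoint only when the validation score improves; a minimal framework-agnostic sketch (the accuracy values below are hypothetical stand-ins for the log above):

```python
# Track the best validation accuracy seen so far and remember the step
# at which it occurred; in the real training loop, saver.save(...) would
# be called only inside the improvement branch.
val_history = [0.957, 0.963, 0.959, 0.968, 0.966]  # hypothetical values

best_acc = 0.0
best_step = None
for step, acc in enumerate(val_history):
    if acc > best_acc:      # only checkpoint on improvement
        best_acc = acc
        best_step = step    # here one would call saver.save(sess, path)
print(best_acc, best_step)
```

Restoring the best checkpoint instead of the last one would avoid reporting a run that happened to end on a weaker validation batch.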
In [13]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
    train_loss, train_accuracy = evaluate(X_train, y_train)
    print("Train Loss = {:.3f}".format(train_loss))
    print("Train Accuracy = {:.3f}".format(train_accuracy))
    valid_loss, valid_accuracy = evaluate(X_valid, y_valid)
    print("Valid Loss = {:.3f}".format(valid_loss))
    print("Valid Accuracy = {:.3f}".format(valid_accuracy))
    test_loss, test_accuracy = evaluate(X_test, y_test)
    print("Test Loss = {:.3f}".format(test_loss))
    print("Test Accuracy = {:.3f}".format(test_accuracy))
Train Loss = 0.001
Train Accuracy = 1.000
Valid Loss = 0.206
Valid Accuracy = 0.959
Test Loss = 0.263
Test Accuracy = 0.955
In [14]:
def corr_predict(X_data, y_data):
    """Return a per-example array: 1 if the example was classified correctly, 0 otherwise."""
    num_examples = len(X_data)
    corr_predict = np.zeros(num_examples)
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        # run the op directly (not wrapped in a list) so the result is 1-D
        # and matches the slice shape; 'int32' is the valid NumPy dtype name
        batch_pred = sess.run(correct_prediction, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
        corr_predict[offset:offset+BATCH_SIZE] = np.asarray(batch_pred).astype('int32')
    return corr_predict
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
    valid_corr_pred = corr_predict(X_valid,y_valid)
    test_corr_pred = corr_predict(X_test,y_test)
In [15]:
fig = plt.figure()
fig.set_size_inches(15,3)
ax = fig.add_subplot(111)
sns.countplot(y_valid[valid_corr_pred == 0])
ax.set_title('Distribution of misclassified classes in valid data', fontsize=20)
fig.savefig('./report_images/misclassify_valid.png')

fig = plt.figure()
fig.set_size_inches(15,3)
ax = fig.add_subplot(111)
sns.countplot(y_test[test_corr_pred == 0])
ax.set_title('Distribution of misclassified classes in test data', fontsize=20)
fig.savefig('./report_images/misclassify_test.png')

As shown above, the distribution of misclassified classes is not quite the same for the valid and test data.
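The countplots above are drawn directly from the label arrays. The same per-class error tally can be computed in plain NumPy, which is handy for inspecting the exact counts behind the plots; this is a sketch assuming integer class labels and a 0/1 correctness vector like `valid_corr_pred`:

```python
import numpy as np

def misclassified_counts(labels, correct, num_classes=43):
    """Count misclassified examples per class.

    labels  : integer class id for each example
    correct : 1 where the prediction was right, 0 where it was wrong
    """
    labels = np.asarray(labels)
    correct = np.asarray(correct)
    wrong = labels[correct == 0]                  # labels of the misclassified examples
    return np.bincount(wrong, minlength=num_classes)

# toy example: 5 examples over 3 classes, two mistakes (both class 1)
counts = misclassified_counts([0, 1, 1, 2, 1], [1, 0, 0, 1, 1], num_classes=3)
print(counts)  # -> [0 2 0]
```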


Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images

In [17]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
web_img_dir = './web_images'
num_images = len(os.listdir(web_img_dir))
orig_images = np.zeros([num_images,32,32,3])
web_images = np.zeros([num_images,32,32,3])

plt.figure(figsize=(15,5))
# read in, resize, and preprocess all the images with the same pipeline used in training
for i, file in enumerate(os.listdir(web_img_dir)):
    img = mpimg.imread(os.path.join(web_img_dir, file))
    img = cv2.resize(img, (32,32))
    orig_images[i] = img
    plt.subplot(1, num_images, i+1)
    plt.imshow(img)
    new_img = hist_eq(img)
    new_img = normalize(new_img)
    web_images[i] = new_img
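`hist_eq` and `normalize` are the preprocessing helpers defined in the earlier training cells of this notebook. If you are reproducing this cell in isolation, a plausible pure-NumPy sketch of equivalent helpers is shown below; this is an assumption, the actual definitions live in the earlier cells and may use cv2 instead:

```python
import numpy as np

def hist_eq(img):
    """Per-channel histogram equalization for a uint8-range RGB image."""
    img = img.astype('uint8')
    out = np.zeros_like(img)
    for c in range(img.shape[2]):
        hist, _ = np.histogram(img[..., c].ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum()
        cdf = 255 * cdf / cdf[-1]                # normalize the CDF to 0..255
        out[..., c] = cdf[img[..., c]]           # map each pixel through the CDF
    return out

def normalize(img):
    """Scale pixel values to roughly [-1, 1]."""
    return (img.astype('float32') - 128.0) / 128.0
```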

Predict the Sign Type for Each Image

In [ ]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
In [69]:
def make_prediction(X_data):
    """Run the model in batches and return the predicted class id for each example."""
    num_examples = len(X_data)
    predicts = np.zeros(num_examples)
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x = X_data[offset:offset+BATCH_SIZE]
        batch_predictions = np.array(sess.run(predictions, feed_dict={x: batch_x, keep_prob: 1.0})).astype('int32')
        predicts[offset:offset+BATCH_SIZE] = batch_predictions
    return predicts

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
    preds = make_prediction(web_images)

for i in range(web_images.shape[0]):
    print('Predicted label is {}, {}'.format(preds[i], signnames.iloc[int(preds[i]),1]))
    plt.figure()
    plt.imshow(orig_images[i].astype('uint8'))
    plt.show()
Predicted label is 14.0, Stop
Predicted label is 18.0, General caution
Predicted label is 25.0, Road work
Predicted label is 11.0, Right-of-way at the next intersection
Predicted label is 40.0, Roundabout mandatory
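The `signnames` lookup above comes from a DataFrame loaded from signnames.csv earlier in the notebook. A minimal sketch of that mapping, assuming the standard two-column `ClassId,SignName` layout (the `class_name` helper is hypothetical, added here for illustration):

```python
import io
import pandas as pd

# a few rows in the signnames.csv layout (ClassId,SignName)
csv_text = """ClassId,SignName
14,Stop
18,General caution
25,Road work
"""
signnames = pd.read_csv(io.StringIO(csv_text))

def class_name(class_id):
    """Map an integer class id to its human-readable sign name."""
    row = signnames[signnames['ClassId'] == int(class_id)]
    return row['SignName'].iloc[0]

print(class_name(14))  # -> Stop
```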

Analyze Performance

In [ ]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

The model predicts all five images correctly, for an accuracy of 100% on these new images.

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row, we get [0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a, and that [3, 0, 5] are the corresponding indices.
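The same top-k selection can be reproduced in plain NumPy with `argsort`, which is a handy cross-check on the tf.nn.top_k output (a sketch):

```python
import numpy as np

def top_k(a, k):
    """Return (values, indices) of the k largest entries per row, in descending order."""
    idx = np.argsort(a, axis=1)[:, ::-1][:, :k]    # indices of the k largest, per row
    vals = np.take_along_axis(a, idx, axis=1)      # gather the matching values
    return vals, idx

# first row of the example array above
a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202]])
vals, idx = top_k(a, 3)
print(idx[0])  # -> [3 0 5], matching the first row of the tf.nn.top_k indices
```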

In [ ]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
In [18]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
    a = np.array(sess.run([logits], feed_dict={x: web_images, keep_prob:1.0})).squeeze()
    a = a.T
    a = np.exp(a) / np.sum(np.exp(a), 0)
    a = a.T
    topK = sess.run(tf.nn.top_k(tf.constant(a), k=5))
    print(topK)
TopKV2(values=array([[  9.99988258e-01,   9.18809928e-06,   1.40822704e-06,
          5.26875112e-07,   4.42988267e-07],
       [  1.00000000e+00,   1.26160515e-09,   1.75527926e-11,
          1.48009345e-13,   2.29485434e-14],
       [  1.00000000e+00,   3.51171039e-08,   5.81386606e-11,
          2.22470289e-11,   1.29399410e-12],
       [  1.00000000e+00,   1.32926337e-09,   8.84456952e-10,
          7.89338428e-10,   2.46590054e-10],
       [  9.66366351e-01,   3.22216712e-02,   1.40586623e-03,
          5.83044221e-06,   2.59219632e-07]], dtype=float32), indices=array([[14, 17, 29,  0,  1],
       [18, 26, 27, 39, 11],
       [25, 22, 29, 31, 30],
       [11, 30, 27,  5, 40],
       [37, 40, 38, 12, 20]], dtype=int32))

As shown above, the model makes predictions with high confidence: the top-1 probability is close to 1 for every image.
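The softmax in the cell above exponentiates the raw logits directly, which can overflow for large logit values. A numerically stable variant subtracts the row maximum first; this is a sketch, and it is mathematically equivalent because softmax is invariant to a constant shift per row:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the max-subtraction trick for numerical stability."""
    z = logits - np.max(logits, axis=1, keepdims=True)   # shift so each row's max is 0
    e = np.exp(z)
    return e / np.sum(e, axis=1, keepdims=True)

big = np.array([[1000.0, 1001.0]])   # naive exp(1000) would overflow to inf
print(softmax(big))                  # -> [[0.26894142 0.73105858]]
```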

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.


Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete, but serves as an additional exercise for understanding the output of a neural network's weights. While neural networks can be great learning devices, they are often referred to as black boxes. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any TensorFlow weight layer you want. The inputs to the function should be a stimulus image (one used during training or a new one you provide) and the TensorFlow variable that represents the layer's state during the training process. For instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End to End Learning for Self-Driving Cars, in the section Visualization of Internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether by comparing feature maps from images with and without a sign, or by comparing feature maps from a trained network and a completely untrained one on the same sign image.

Combined Image

Your output should look something like this (above)

In [52]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))  
        activation = tf_activation.eval(session=sess, feed_dict={x:image_input, keep_prob:1.0})
    featuremaps = activation.shape[3]
    fig = plt.figure(plt_num, figsize=(15,3))
    for featuremap in range(featuremaps):
        plt.subplot(2,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
    fig.savefig('./report_images/row_sign_conv2.png')
    plt.show()
In [47]:
sns.set_style("whitegrid", {'axes.grid' : False})
outputFeatureMap(np.expand_dims(web_images[4],0), conv1, activation_min=-1, activation_max=-1 ,plt_num=1)
In [49]:
outputFeatureMap(np.expand_dims(web_images[4],0), conv2, activation_min=-1, activation_max=-1 ,plt_num=1)
In [51]:
sns.set_style("whitegrid", {'axes.grid' : False})
outputFeatureMap(np.expand_dims(web_images[3],0), conv1, activation_min=-1, activation_max=-1 ,plt_num=1)
In [53]:
outputFeatureMap(np.expand_dims(web_images[3],0), conv2, activation_min=-1, activation_max=-1 ,plt_num=1)